    Cooperative Local Caching under Heterogeneous File Preferences

    Local caching is an effective scheme for leveraging the memory of mobile terminals (MTs) and short-range communications to save bandwidth and reduce download delay in cellular communication systems. Specifically, MTs first cache files in their local memories during off-peak hours and then exchange requested files with nearby MTs during peak hours. However, prior works largely overlook MTs' heterogeneity in file preferences and their selfish behaviours. In this paper, we categorize the MTs into different interest groups according to their file preferences. Each group of MTs aims to increase the probability of successfully discovering its requested files at neighbouring MTs (from the same or different groups). Hence, we define each group's utility as this discovery probability, to be maximized by deciding the caching strategies of the different groups. By modelling MTs' mobility by homogeneous Poisson point processes (HPPPs), we analytically characterize the utilities in closed form. We first consider the fully cooperative case, where a central coordinator makes caching decisions for all groups. We formulate this as a weighted-sum utility maximization problem, through which the maximum utility trade-offs among the groups are characterized. Next, we study two benchmark cases under selfish caching, namely partial cooperation (with inter-group file sharing) and no cooperation (without it), and derive the optimal caching distributions for both. Finally, numerical examples compare the utilities in the different cases and show the effectiveness of fully cooperative local caching over the two benchmarks.
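    The HPPP model and the weighted-sum formulation can be prototyped in a few lines. The sketch below is a hedged illustration, not the paper's actual formulation: it assumes two groups, a discovery probability of 1 - exp(-pi r^2 sum_g lam_g p_gf) obtained by Poisson thinning, and a per-group cache budget; all names and parameter values are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative setup (assumed, not from the paper): G groups, F files,
# HPPP density lam[g] per group, exchange radius r, preference matrix
# q[g, f], cache budget C files per MT, weighted-sum coefficients w[g].
G, F = 2, 5
lam = np.array([1.0, 0.5])
r = 1.0
q = np.array([[0.5, 0.3, 0.1, 0.07, 0.03],
              [0.03, 0.07, 0.1, 0.3, 0.5]])   # heterogeneous preferences
C = 2
w = np.array([0.5, 0.5])

def utilities(p):
    """p[g, f]: probability that an MT of group g caches file f.
    By HPPP thinning, neighbours within radius r caching file f are
    Poisson with mean pi*r^2 * sum_g lam[g]*p[g, f], so the discovery
    probability is 1 - exp(-that mean)."""
    p = p.reshape(G, F)
    hit = 1.0 - np.exp(-np.pi * r**2 * (lam[:, None] * p).sum(axis=0))
    return q @ hit                      # per-group expected discovery prob.

res = minimize(
    lambda p: -(w @ utilities(p)),      # maximize the weighted-sum utility
    x0=np.full(G * F, C / F),
    bounds=[(0.0, 1.0)] * (G * F),
    constraints=[{"type": "ineq",       # per-group cache-size budget
                  "fun": lambda p, g=g: C - p.reshape(G, F)[g].sum()}
                 for g in range(G)],
    method="SLSQP",
)
print(res.x.reshape(G, F).round(3), utilities(res.x).round(3))
```

    Sweeping the weights w traces out the utility trade-off between the groups, mirroring the weighted-sum characterization described in the abstract.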

    LERC: Coordinated Cache Management for Data-Parallel Systems

    Memory caches are aggressively used in today's data-parallel frameworks such as Spark, Tez and Storm. By caching input and intermediate data in memory, compute tasks can be sped up by orders of magnitude. To maximize the chance of in-memory data access, existing cache algorithms, whether recency- or frequency-based, use cache hit ratio as the optimization objective. However, contrary to conventional belief, we show in this paper that simply pursuing a higher cache hit ratio for individual data blocks does not necessarily translate into faster task completion in data-parallel environments. A data-parallel task typically depends on multiple input data blocks; unless all of these blocks are cached in memory, no speedup results. To capture this all-or-nothing property, we propose a more relevant metric, called the effective cache hit ratio: a cache hit on a data block is effective only if it speeds up a compute task. To optimize the effective cache hit ratio, we propose the Least Effective Reference Count (LERC) policy, which keeps the dependent blocks of a compute task in memory as a whole. We have implemented LERC as a memory manager in Spark and evaluated its performance on an Amazon EC2 deployment. Evaluation results demonstrate that LERC speeds up data-parallel jobs by up to 37% compared with the widely employed least-recently-used (LRU) policy.
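    The all-or-nothing property behind the effective cache hit ratio is easy to make concrete. The following is a minimal sketch of an LERC-style eviction decision, assuming each pending task is described by the set of block IDs it needs; it illustrates the idea and is not Spark's or the paper's actual memory-manager implementation.

```python
from collections import defaultdict

def effective_ref_counts(cached, tasks):
    """cached: set of block IDs in memory; tasks: iterable of block-ID sets.
    A task contributes to a block's count only if *all* of the task's
    blocks are cached -- the all-or-nothing property: a partial hit
    cannot speed the task up."""
    counts = defaultdict(int)
    for needed in tasks:
        if needed <= cached:
            for block in needed:
                counts[block] += 1
    return counts

def pick_victim(cached, tasks):
    # Evict the cached block backing the fewest fully cached task inputs.
    counts = effective_ref_counts(cached, tasks)
    return min(cached, key=lambda b: counts[b])

cached = {"b1", "b2", "b3", "b4"}
tasks = [{"b1", "b2"}, {"b1", "b2"}, {"b3", "b5"}]   # b5 is not cached
print(pick_victim(cached, tasks))  # b3 or b4: neither backs a complete task
```

    A plain LRU or reference-count policy might evict b1 or b2 here even though they are the only blocks whose hits are effective; counting only fully cached dependency sets avoids exactly that mistake.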

    Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study

    Large Language Models (LLMs) have shown remarkable proficiency in language understanding and have been successfully applied to a variety of real-world tasks through task-specific fine-tuning or prompt engineering. Despite these advancements, it remains an open question whether LLMs are fundamentally capable of reasoning and planning, or whether they primarily recall and synthesize information from their training data. In our research, we introduce a novel task -- Minesweeper -- specifically designed in a format unfamiliar to LLMs and absent from their training datasets. The task challenges LLMs to identify the locations of mines based on the numerical clues provided by adjacent opened cells. Solving it requires understanding each cell's state, discerning the spatial relationships between clues and mines, and strategizing actions based on logical deductions drawn from the arrangement of the cells. Our experiments, including trials with the advanced GPT-4 model, indicate that while LLMs possess the foundational abilities required for this task, they struggle to integrate them into the coherent, multi-step logical reasoning process needed to solve Minesweeper. These findings highlight the need for further research to understand the nature of LLMs' reasoning capabilities in such settings and to explore pathways towards more sophisticated AI reasoning and planning models.
    Comment: 24 pages, 5 figures, 3 tables
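    To make the task format concrete, here is a hedged sketch of how a Minesweeper instance of the kind described could be generated and serialized as a text prompt. The board size, cell symbols ('?' for unopened cells) and prompt wording are assumptions for illustration; the paper's exact encoding may differ.

```python
import random

def make_board(rows, cols, n_mines, seed=0):
    rng = random.Random(seed)
    mines = set(rng.sample([(r, c) for r in range(rows) for c in range(cols)],
                           n_mines))
    def clue(r, c):
        # Numerical clue: number of mines among the 8 adjacent cells.
        return sum((r + dr, c + dc) in mines
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return mines, clue

def render(rows, cols, clue, opened):
    # Opened cells show their clue; unopened cells show '?'.
    return "\n".join(" ".join(str(clue(r, c)) if (r, c) in opened else "?"
                              for c in range(cols)) for r in range(rows))

mines, clue = make_board(4, 4, n_mines=3)
opened = {(r, c) for r in range(4) for c in range(4) if (r, c) not in mines}
print(render(4, 4, clue, opened))
print("Task: identify every mined cell, reasoning step by step.")
```

    Feeding such a grid to a model and checking its predicted mine coordinates against the ground-truth set gives a simple, automatically verifiable probe of the multi-step spatial reasoning the abstract describes.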
    • …